
Conversation

AlexsanderHamir
Collaborator

Title

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🧹 Refactoring

Changes

  • Replaced the linear provider lookup with an O(1) dispatcher table.
  • Migrated one provider and its models as an example.
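
The dispatcher-table idea can be sketched roughly as follows; the provider names and factory functions here are illustrative placeholders, not LiteLLM's actual symbols:

```python
from typing import Callable, Dict


def get_openai_config() -> dict:
    # Placeholder for a real provider config factory
    return {"provider": "openai"}


def get_gemini_config() -> dict:
    return {"provider": "gemini"}


# Built once: provider name -> config factory. A dict lookup is average O(1),
# unlike walking an if/elif chain, which is O(n) in the number of providers.
PROVIDER_DISPATCH: Dict[str, Callable[[], dict]] = {
    "openai": get_openai_config,
    "gemini": get_gemini_config,
}


def get_provider_config(provider: str) -> dict:
    try:
        return PROVIDER_DISPATCH[provider]()
    except KeyError:
        raise ValueError(f"unsupported provider: {provider}")
```

Adding a provider then becomes a one-line table entry rather than another branch in a long conditional.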

Test Results

(Screenshot: test results, 2025-10-06 at 12:46 PM)
  • The failures appear to be unrelated; one of them stems from a problem I fixed in a previous PR, which had been causing many tests to fail.

AlexsanderHamir and others added 7 commits October 6, 2025 12:38
Replace linear if-else provider routing with O(1) dispatcher lookup.
Migrate 12 providers to new architecture, achieving significant speedup.
…ST API

Add support for function calling (tools) with Snowflake Cortex models that support it (e.g., Claude 3.5 Sonnet).

Changes:
- Add 'tools' and 'tool_choice' to supported OpenAI parameters
- Implement request transformation: OpenAI function format → Snowflake tool_spec format
- Implement response transformation: Snowflake content_list with tool_use → OpenAI tool_calls
- Add tool_choice transformation: OpenAI nested format → Snowflake array format

Request transformation:
- Transform tools from nested {"type": "function", "function": {...}} to Snowflake's {"tool_spec": {"type": "generic", "name": "...", "input_schema": {...}}}
- Transform tool_choice from {"type": "function", "function": {"name": "..."}} to {"type": "tool", "name": ["..."]}
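
A minimal sketch of the request-side mapping described above, assuming the OpenAI and Snowflake shapes quoted in this commit message (the helper names are hypothetical):

```python
from typing import Any, Dict, List


def transform_tools(openai_tools: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    # OpenAI: {"type": "function", "function": {"name": ..., "parameters": ...}}
    # Snowflake: {"tool_spec": {"type": "generic", "name": ..., "input_schema": ...}}
    snowflake_tools = []
    for tool in openai_tools:
        fn = tool.get("function", {})
        snowflake_tools.append({
            "tool_spec": {
                "type": "generic",
                "name": fn.get("name"),
                "input_schema": fn.get("parameters", {}),
            }
        })
    return snowflake_tools


def transform_tool_choice(tool_choice: Dict[str, Any]) -> Dict[str, Any]:
    # OpenAI nested form -> Snowflake array form, per the description above
    name = tool_choice.get("function", {}).get("name")
    return {"type": "tool", "name": [name]}
```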

Response transformation:
- Parse Snowflake's content_list array containing tool_use objects
- Extract tool calls with tool_use_id, name, and input
- Convert to OpenAI's tool_calls format with proper JSON serialization
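
The response-side mapping could look roughly like this; the exact nesting of Snowflake's content_list entries is an assumption here, and only the field names (tool_use, tool_use_id, name, input) come from the commit message:

```python
import json
from typing import Any, Dict, List


def transform_content_list(content_list: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    # Walk content_list, keep only tool_use entries, and emit OpenAI-style
    # tool_calls. The {"tool_use": {...}} nesting is an assumed shape.
    tool_calls = []
    for item in content_list:
        if item.get("type") != "tool_use":
            continue
        tool_use = item.get("tool_use", item)
        tool_calls.append({
            "id": tool_use.get("tool_use_id"),
            "type": "function",
            "function": {
                "name": tool_use.get("name"),
                # OpenAI expects arguments as a JSON-encoded string
                "arguments": json.dumps(tool_use.get("input", {})),
            },
        })
    return tool_calls
```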

Testing:
- Add 7 unit tests covering request/response transformations
- Add integration test for Responses API with tool calling
- All tests passing

Fixes issue #15218

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>

* perf(router): add model_name index for O(1) deployment lookups

Add model_name_to_deployment_indices mapping to optimize _get_all_deployments()
from O(n) to O(1) + O(k) lookups.

- Add model_name_to_deployment_indices: Dict[str, List[int]]
- Add _build_model_name_index() to build/maintain the index
- Update _add_model_to_list_and_index_map() to maintain both indices
- Refactor to use idx = len(self.model_list) before append (cleaner)
- Optimize _get_all_deployments() to use index instead of linear scan
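
The steps above can be sketched with a minimal stand-in for the router (MiniRouter is illustrative, not LiteLLM's actual Router class; the attribute and method names mirror the commit message):

```python
from typing import Dict, List


class MiniRouter:
    """Minimal sketch of a model_name -> deployment-indices index."""

    def __init__(self, model_list: List[dict]):
        self.model_list: List[dict] = []
        self.model_name_to_deployment_indices: Dict[str, List[int]] = {}
        for deployment in model_list:
            self._add_model_to_list_and_index_map(deployment)

    def _add_model_to_list_and_index_map(self, deployment: dict) -> None:
        # Compute the index before appending, so list and index stay in sync
        idx = len(self.model_list)
        self.model_list.append(deployment)
        name = deployment["model_name"]
        self.model_name_to_deployment_indices.setdefault(name, []).append(idx)

    def _get_all_deployments(self, model_name: str) -> List[dict]:
        # O(1) index lookup + O(k) to collect k matches,
        # instead of an O(n) scan over the whole model_list
        indices = self.model_name_to_deployment_indices.get(model_name, [])
        return [self.model_list[i] for i in indices]
```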

* test(router): add test coverage for _build_model_name_index

Add single comprehensive test for _build_model_name_index() function to fix
code coverage CI failure.

The test verifies:
- Index correctly maps model_name to deployment indices
- Handles multiple deployments per model_name
- Clears and rebuilds index correctly

Fixes: CI code coverage error for _build_model_name_index
Replace linear if-else provider routing with O(1) dispatcher lookup.
Migrate 12 providers to new architecture, achieving significant speedup.

update docs

update docs

vercel bot commented Oct 6, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Preview | Comments | Updated (UTC)
litellm | Ready | Preview | Comment | Oct 6, 2025 7:52pm

Contributor

do we need a new 'dispatch' here vs. just using the providerconfig.get_chat_provider_config?

def get_provider_chat_config( # noqa: PLR0915

Reference implementation on rerank:

rerank_provider_config: BaseRerankConfig = (

Contributor

+1 can we just re-use the existing OpenAI config file to implement the dispatcher?

Collaborator Author

Great idea, I hadn't seen that in the codebase.

Contributor

@ishaan-jaff ishaan-jaff left a comment

reviewed


4 participants